Functionality and dialogue experience are two important factors of task-oriented dialogue systems. Conventional approaches with a closed schema (e.g., conversational semantic parsing) often fail, as both the functionality and the dialogue experience are strongly constrained by the underlying schema. We introduce a new paradigm for task-oriented dialogue, Dialog2API, that greatly expands the functionality and provides a seamless dialogue experience. The conversational model interacts with the environment by generating and executing programs that trigger a set of pre-defined APIs. The model also manages the dialogue policy and interacts with the user by generating appropriate natural language responses. By allowing free-form programs, Dialog2API supports composite goals that combine different APIs, while unrestricted program revision provides a natural and robust dialogue experience. To facilitate Dialog2API, the core model is provided with API documents, an execution environment and, optionally, example dialogues annotated with programs. We propose an approach tailored to Dialog2API, where the dialogue state is represented by a stack of programs, with the most recently mentioned program on top of the stack. Dialog2API can serve many application scenarios, such as software automation and customer service. In this paper, we construct a dataset for AWS S3 APIs and present evaluation results for in-context learning baselines.
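The stack-of-programs state representation can be illustrated with a minimal sketch (the class and method names are hypothetical, not from the paper): the most recently mentioned program sits on top of the stack, where it can be revised in place before execution.

```python
class ProgramStackState:
    """Dialogue state as a stack of API-calling programs (top = most recent)."""

    def __init__(self):
        self.stack = []

    def push(self, program: str):
        # A new user goal becomes a new program on top of the stack.
        self.stack.append(program)

    def revise_top(self, program: str):
        # Unrestricted revision: the user amends the current goal in place.
        self.stack[-1] = program

    def top(self) -> str:
        return self.stack[-1]


state = ProgramStackState()
state.push('s3.list_buckets()')
state.push('s3.create_bucket(Bucket="logs")')
# User: "actually, call it 'audit-logs'"
state.revise_top('s3.create_bucket(Bucket="audit-logs")')
```

In this sketch, revising the top program instead of appending a new one is what makes follow-up turns like "actually, call it X" cheap to represent.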
translated by Google Translate
The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each track.
Solving variational image segmentation problems with hidden physics is often expensive and requires different algorithms and manually tuned model parameters. Deep learning methods based on the U-Net structure have achieved outstanding performance in many different medical image segmentation tasks, but designing such networks requires many parameters and much training data, which are not always available for practical problems. In this paper, inspired by the traditional multiphase convex Mumford-Shah variational model and the full approximation scheme (FAS) for solving nonlinear systems, we propose a novel variational-model-informed network (denoted FAS-Unet) that exploits model and algorithm priors to extract multi-scale features. The proposed model-informed network integrates image data and mathematical models, and implements them by learning a few convolution kernels. Based on variational theory and the FAS algorithm, we first design a feature extraction sub-network (FAS-Solution module) to solve the model-driven nonlinear systems, where a skip-connection is employed to fuse the multi-scale features. Secondly, we design a convolution block that fuses the features extracted in the previous stage, producing the final segmentation probability. Experimental results on three different medical image segmentation tasks show that the proposed FAS-Unet is very competitive with other state-of-the-art methods in qualitative, quantitative and model-complexity evaluations. Moreover, it may also be possible to train specialized network architectures that automatically satisfy some of the mathematical and physical laws in other image problems, for better accuracy, faster training and improved generalization. The code is available at \url{https://github.com/zhuhui100/FASUNet}.
Noise is ubiquitous in the image acquisition process, so sufficient denoising is often an important first step in image processing. In recent decades, deep neural networks (DNNs) have been widely used for image denoising. Most DNN-based image denoising methods require large-scale datasets or focus on supervised settings that need single or paired clean images, or a set of noisy images, which places a significant burden on the image acquisition process. Moreover, denoisers trained on limited-scale datasets may suffer from overfitting. To alleviate these problems, we introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation. With the proposed design, we are able to characterize our denoiser with fewer parameters and train it on a single image, which considerably improves the model's generalizability and reduces the cost of data acquisition. Extensive experiments have been conducted on both synthetic and real-world noisy images. The empirical results show that our proposed method outperforms existing non-learning-based methods (e.g., low-pass filter, non-local mean) and single-image unsupervised denoisers (e.g., DIP, NN+BM3D) evaluated on both in-sample and out-of-sample datasets. The proposed method even achieves comparable performance to some supervised methods (e.g., DnCNN).
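The paper's learned denoiser is not reproduced here, but the underlying idea of approximating a noisy image tensor by one of low Tucker rank can be sketched with a classical truncated HOSVD. The function names and the rank choice below are illustrative, not the authors' method:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold`, back into a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_product(T, U, mode):
    """Mode-n product T x_n U."""
    shape = T.shape[:mode] + (U.shape[0],) + T.shape[mode + 1:]
    return fold(U @ unfold(T, mode), mode, shape)

def tucker_denoise(img, ranks):
    """Truncated HOSVD: keep the leading singular subspace of each mode."""
    factors = [np.linalg.svd(unfold(img, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = img
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)   # compress each mode
    recon = core
    for m, U in enumerate(factors):
        recon = mode_product(recon, U, m)   # expand back to image shape
    return recon

# Smooth a single (H, W, C) image by choosing small mode ranks.
noisy = np.random.default_rng(0).normal(size=(32, 32, 3))
smooth = tucker_denoise(noisy, (8, 8, 2))
```

Because the approximation is computed from the single noisy image itself, this kind of method needs no external training data, which is the property the self-supervised framework above builds on.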
As a special information carrier containing both structure and feature information, graphs are widely used in graph mining, e.g., by graph neural networks (GNNs). However, in some practical scenarios, graph data are stored separately by multiple distributed parties and may not be directly shared due to conflicts of interest. Hence, federated graph neural networks have been proposed to address such data-silo problems while preserving the privacy of each party (or client). Nevertheless, different graph data distributions among parties, known as statistical heterogeneity, may degrade the performance of naive federated learning algorithms such as FedAvg. In this paper, we propose FedEgo, a federated graph learning framework based on ego-graphs, to tackle the challenges above, where each client trains its local model while also contributing to the training of a global model. FedEgo applies ego-graphs to make full use of the structure information and utilizes Mixup to address privacy concerns. To deal with statistical heterogeneity, we integrate personalization into learning and propose an adaptive mixing coefficient strategy that enables clients to achieve their optimal personalization. Extensive experimental results and in-depth analysis demonstrate the effectiveness of FedEgo.
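A core ingredient of the framework above is the ego-graph a client computes around a node. A minimal sketch of k-hop ego-graph extraction over an adjacency list (names illustrative; this omits FedEgo's model training, Mixup, and personalization stages):

```python
from collections import deque

def ego_graph(adj, center, k):
    """Return the set of nodes within k hops of `center` in an adjacency list."""
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # stop expanding at the hop limit
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen


adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(sorted(ego_graph(adj, 0, 2)))  # → [0, 1, 2, 3]
```

Each client can share statistics computed over such local neighborhoods rather than its raw graph, which is what makes the ego-graph view a natural fit for the federated setting described above.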
Thanks to breakthroughs in generative adversarial networks (GANs), 3D-controllable portrait synthesis has advanced significantly. However, manipulating existing face images with precise 3D control remains challenging. While concatenating GAN inversion and a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality. To fill this gap, we propose 3D-FM GAN, a novel conditional GAN framework designed specifically for 3D-controllable face manipulation that requires no tuning after the end-to-end learning phase. By carefully encoding both the input face image and a physically-based rendering of the 3D edits into our image generator, our method achieves high-quality, identity-preserving, 3D-controllable face manipulation. To effectively learn this novel framework, we develop two essential training strategies and a novel multiplicative co-modulation architecture that improves significantly over naive schemes. Through extensive evaluations, we show that our method outperforms prior arts on various tasks, with better editability, stronger identity preservation, and higher photo-realism. Furthermore, we demonstrate the better generalizability of our design on large pose editing and out-of-domain images.
Aesthetic assessment of images can be divided into two main forms: numerical assessment and language assessment. Aesthetic captioning of photographs has so far been the only task addressing aesthetic language assessment. In this paper, we propose a new task of aesthetic assessment: Aesthetic Visual Question and Answering (AVQA) of images. Given an aesthetic question about an image, the model can predict the answer. We use images from \textit{www.flickr.com}. The objective QA pairs are generated by the proposed aesthetic-attribute analysis algorithms. In addition, we introduce subjective QA pairs, which are converted from aesthetic numerical labels and from sentiment analysis with large-scale pre-trained models. We build the first aesthetic visual question answering dataset, AesVQA, which contains 72,168 high-quality images and 324,756 pairs of aesthetic questions and answers. Two methods for adjusting the data distribution have been proposed and shown to improve the accuracy of existing models. This is the first work to address the aesthetic VQA task and to introduce subjectivity into VQA tasks. The experimental results demonstrate that our methods outperform other VQA models on this new task.
In recent years, image generation has made great strides in improving image quality, producing high-fidelity images. In addition, some recent architecture designs enable GANs to learn the semantic attributes represented in different layers in an unsupervised manner. However, there is still a lack of research on generating face images that are more consistent with human aesthetics. Based on EigenGAN [He et al., ICCV 2021], we build reinforcement learning techniques into the generator of EigenGAN. The agent tries to figure out how to alter the semantic attributes of the generated human faces toward more preferable ones. To this end, we train an aesthetics scoring model that can conduct facial beauty prediction. We also leverage this scoring model to analyze the correlation between face attributes and aesthetics scores. Empirically, off-the-shelf techniques from reinforcement learning do not work well. Therefore, we instead propose a new variant that incorporates components that have emerged in the reinforcement learning community in recent years. Compared with the original generated images, the adjusted images show clear distinctions concerning various attributes. Experimental results using MindSpore demonstrate the effectiveness of the proposed method. The altered facial images are generally more attractive, with significantly improved aesthetic levels.
Inverse text normalization (ITN) is used to convert the spoken-form output of an automatic speech recognition (ASR) system to written form. Traditional handcrafted ITN rules can be complex to transcribe and maintain. Meanwhile, neural modeling approaches require quality large-scale spoken-written pair examples in the same or a similar domain as the ASR system (in-domain data). Both approaches require costly and complex annotation. In this paper, we propose a data augmentation technique that effectively generates rich spoken-written numeric pairs from out-of-domain textual data with minimal human annotation. We empirically demonstrate that ITN models trained with our data augmentation technique consistently outperform ITN models trained using only in-domain data, by 14.44% in overall accuracy across all numeric surfaces (e.g., cardinal, currency, and fraction).
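The augmentation idea of mining written-form numbers from out-of-domain text and verbalizing them into spoken form can be sketched with a toy English cardinal verbalizer. This is an illustrative stand-in, not the paper's pipeline, which also covers surfaces such as currency and fractions:

```python
ONES = ("zero one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = ("", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety")

def spoken_cardinal(n: int) -> str:
    """Verbalize 0 <= n < 1,000,000 as a spoken-form English cardinal."""
    if n < 20:
        return ONES[n]
    if n < 100:
        rest = "" if n % 10 == 0 else " " + ONES[n % 10]
        return TENS[n // 10] + rest
    if n < 1000:
        rest = "" if n % 100 == 0 else " " + spoken_cardinal(n % 100)
        return ONES[n // 100] + " hundred" + rest
    rest = "" if n % 1000 == 0 else " " + spoken_cardinal(n % 1000)
    return spoken_cardinal(n // 1000) + " thousand" + rest

def make_pair(written: str):
    """Turn a written-form number mined from text into a (spoken, written) pair."""
    return spoken_cardinal(int(written.replace(",", ""))), written


print(make_pair("1,234"))  # ('one thousand two hundred thirty four', '1,234')
```

Running such a verbalizer over numbers found in ordinary text yields spoken-written training pairs without any human transcription, which is the minimal-annotation property the technique above relies on.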
Graph outlier detection is an emerging but crucial machine learning task with numerous applications. Despite the proliferation of algorithms in recent years, the lack of a standard and unified setting for performance evaluation limits their advancement and use in real-world applications. To bridge this gap, we present, to the best of our knowledge, the first comprehensive unsupervised node outlier detection benchmark, UNOD, with the following highlights: (1) evaluating fourteen methods with backbones ranging from classical matrix factorization to the latest graph neural networks; (2) benchmarking method performance on real-world datasets with injected outliers of different types as well as natural outliers; (3) comparing the efficiency and scalability of the algorithms by runtime and GPU memory usage on synthetic graphs of varying scales. Based on analyses of extensive experimental results, we discuss the pros and cons of current methods and point out multiple crucial and promising future research directions.
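One common way such benchmarks inject outliers is structural: densely connecting a few randomly chosen nodes into a clique so that their connectivity deviates from the rest of the graph. A minimal sketch under illustrative names, not necessarily UNOD's exact injection protocol:

```python
import numpy as np

def inject_structural_outliers(A, m, rng):
    """Form a clique among m randomly chosen nodes of adjacency matrix A.

    Returns a modified copy of A and the indices of the injected outliers.
    """
    A = A.copy()
    idx = rng.choice(A.shape[0], size=m, replace=False)
    for i in idx:
        for j in idx:
            if i != j:
                A[i, j] = 1  # add every edge inside the chosen group
    return A, idx


rng = np.random.default_rng(0)
A = np.zeros((10, 10), dtype=int)          # an initially empty 10-node graph
A_out, outliers = inject_structural_outliers(A, 4, rng)
```

With ground-truth outlier indices produced this way, a benchmark can score each detector's ranking of nodes against a known label set, complementing evaluation on natural outliers.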